Continuous monitoring of trained ML models to determine when their predictions should and should not be trusted is essential for their safe deployment. Such a framework should be high-performing, explainable, post-hoc and actionable. We propose TRUST-LAPSE, a "mistrust" scoring framework for continuous model monitoring. We assess the trustworthiness of each input sample's model prediction using a sequence of latent-space embeddings. Specifically, (a) our latent-space mistrust score estimates mistrust using distance metrics (Mahalanobis distance) and similarity metrics (cosine similarity) in the latent space, and (b) our sequential mistrust score determines deviations in correlations over the past input representations in a non-parametric, sliding-window-based manner, enabling actionable continuous monitoring. We evaluate TRUST-LAPSE via two downstream tasks: (1) distributionally shifted input detection and (2) data drift detection, across diverse domains -- audio and vision using public datasets -- and further benchmark our approach on a challenging, real-world electroencephalogram (EEG) dataset for seizure detection. Our latent-space mistrust scores achieve state-of-the-art results with AUROCs of 84.1 (vision), 73.9 (audio) and 77.1 (clinical EEG), outperforming baselines by over 10 points. We expose critical failures in popular baselines that remain insensitive to the semantic content of inputs, rendering them unfit for real-world model monitoring. We show that our sequential mistrust scores achieve high drift detection rates: over 90% of the streams show < 20% error across all domains. Through extensive qualitative and quantitative evaluations, we show that our mistrust scores are more robust and provide explainability, easing adoption in practice.
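The two mistrust scores described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the additive combination of Mahalanobis distance and cosine dissimilarity, and the median-based sliding-window deviation are all illustrative choices.

```python
import numpy as np

def latent_mistrust(z, train_mean, train_cov_inv):
    """Latent-space mistrust for one embedding z: Mahalanobis distance to the
    training embedding distribution plus cosine dissimilarity to its mean.
    Higher score = less trustworthy prediction."""
    diff = z - train_mean
    maha = float(np.sqrt(diff @ train_cov_inv @ diff))
    cos = float(z @ train_mean /
                (np.linalg.norm(z) * np.linalg.norm(train_mean) + 1e-12))
    return maha + (1.0 - cos)

def sequential_mistrust(scores, window=20):
    """Sequential mistrust over a stream of per-sample scores: deviation of the
    latest sliding window's mean from a reference (initial) window, scaled
    non-parametrically by the reference window's median absolute deviation."""
    ref = np.asarray(scores[:window])
    cur = np.asarray(scores[-window:])
    spread = np.median(np.abs(ref - np.median(ref))) + 1e-12
    return float(abs(cur.mean() - ref.mean()) / spread)

# Usage: fit statistics on training embeddings, then score a clean stream
# and a stream whose second half has drifted.
rng = np.random.default_rng(0)
train_z = rng.normal(size=(500, 8))
mean = train_z.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_z, rowvar=False) + 1e-6 * np.eye(8))

clean = [latent_mistrust(z, mean, cov_inv) for z in rng.normal(size=(100, 8))]
drifted = clean[:50] + [latent_mistrust(z + 5.0, mean, cov_inv)
                        for z in rng.normal(size=(50, 8))]
assert sequential_mistrust(drifted) > sequential_mistrust(clean)
```

The sliding-window score only looks at rank-free summary statistics of the recent past, which is what makes it actionable for monitoring: a single threshold on the windowed deviation flags drift without assuming any parametric form for the score distribution.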
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
Current research on users' perspectives of cyber security and privacy related to traditional and smart devices at home is very active, but the focus is often more on specific modern devices such as mobile and smart IoT devices in a home context. In addition, most of these studies were based on smaller-scale empirical methods such as online surveys and interviews. We endeavour to fill these research gaps by conducting a larger-scale study based on a real-world dataset of 413,985 tweets posted by non-expert users on Twitter in six months of three consecutive years (January and February in 2019, 2020 and 2021). Two machine learning-based classifiers were developed to identify these 413,985 tweets. We analysed this dataset to understand non-expert users' cyber security and privacy perspectives, including the yearly trend and the impact of the COVID-19 pandemic. We applied topic modelling, sentiment analysis and qualitative analysis of selected tweets in the dataset, leading to various interesting findings. For instance, we observed a 54% increase in non-expert users' tweets on cyber security and/or privacy related topics in 2021, compared to before the start of global COVID-19 lockdowns (January 2019 to February 2020). We also observed an increased level of help-seeking tweets during the COVID-19 pandemic. Our analysis revealed a diverse range of topics discussed by non-expert users across the three years, including VPNs, Wi-Fi, smartphones, laptops, smart home devices, financial security, and security and privacy issues involving different stakeholders. Overall negative sentiment was observed across almost all topics non-expert users discussed on Twitter in all three years. Our results confirm the multi-faceted nature of non-expert users' perspectives on cyber security and privacy and call for more holistic, comprehensive and nuanced research on different facets of such perspectives.